
    Finite-Size Corrections for Ground States of Edwards-Anderson Spin Glasses

    Extensive computations of ground-state energies of the Edwards-Anderson spin glass on bond-diluted, hypercubic lattices are conducted in dimensions d=3,...,7. Results are presented for bond densities exactly at the percolation threshold, p=p_c, and deep within the glassy regime, p>p_c, where finding ground states becomes a hard combinatorial problem. Finite-size corrections of the form 1/N^w are shown to be consistent throughout with the prediction w=1-y/d, where y refers to the "stiffness" exponent that controls the formation of domain-wall excitations at low temperatures. At p=p_c, an extrapolation for d→∞ appears to match our mean-field results for these corrections. In the glassy phase, w does not approach the value of 2/3 for large d predicted from simulations of the Sherrington-Kirkpatrick spin glass. However, the value of w reached at the upper critical dimension does match certain mean-field spin glass models on sparse random networks of regular degree called Bethe lattices.
    Comment: 6 pages, RevTex4, all ps figures included; corrected and final version with extended analysis and more data, such as for the case d=3. Find additional information at http://www.physics.emory.edu/faculty/boettcher
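    The finite-size scaling described in this abstract, e(N) ≈ e_∞ + a/N^w with w = 1 - y/d, can be fit directly to ground-state energy data. The sketch below is not the authors' code: the energies, system sizes, and the value of y are placeholders for illustration only.

```python
# Minimal sketch: fitting the leading finite-size correction
# e(N) = e_inf + a / N**w to ground-state energies per spin.
# All numerical values below are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def fss(N, e_inf, a, w):
    """Leading finite-size correction to the energy per spin."""
    return e_inf + a / N**w

# Hypothetical system sizes and mean ground-state energies per spin.
N = np.array([4**3, 6**3, 8**3, 10**3, 12**3], dtype=float)
e = np.array([-1.690, -1.696, -1.699, -1.700, -1.701])

popt, pcov = curve_fit(fss, N, e, p0=(-1.70, 0.5, 0.7))
e_inf, a, w = popt
print(f"e_inf = {e_inf:.4f}, a = {a:.3f}, w = {w:.3f}")

# Comparison with the predicted exponent w = 1 - y/d for an assumed,
# purely illustrative stiffness exponent y in d = 3.
y, d = 0.2, 3
print("predicted w =", 1 - y / d)
```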

    Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

    Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks. Despite its success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploring preliminary configurations on small subsets and extrapolating to the full dataset. We construct a Bayesian optimization procedure, dubbed Fabolas, which models loss and training time as a function of dataset size and automatically trades off high information gain about the global optimum against computational cost. Experiments optimizing support vector machines and deep neural networks show that Fabolas often finds high-quality solutions 10 to 100 times faster than other state-of-the-art Bayesian optimization methods or the recently proposed bandit strategy Hyperband.
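    The core ingredient is modeling validation error as a function of training-set size so that cheap evaluations on small subsets can be extrapolated to the full dataset. The sketch below is not the Fabolas procedure (which uses a Gaussian-process model and an information-gain acquisition function); it only illustrates the subset-and-extrapolate idea with a simple power-law learning curve, using assumed parameter values.

```python
# Toy illustration (not the Fabolas implementation): evaluate one SVM
# configuration on growing subsets and extrapolate the validation error to
# the full training set with a power-law learning curve err(s) = c + k*s^-alpha.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def learning_curve(s, c, k, alpha):
    return c + k * s ** (-alpha)

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

sizes, errors = [], []
for frac in (0.05, 0.1, 0.2, 0.4):          # cheap evaluations on subsets
    n = int(frac * len(X_tr))
    clf = SVC(C=1.0, gamma="scale").fit(X_tr[:n], y_tr[:n])
    sizes.append(n)
    errors.append(1.0 - clf.score(X_val, y_val))

popt, _ = curve_fit(learning_curve, np.array(sizes, float), np.array(errors),
                    p0=(0.05, 1.0, 0.5), maxfev=10000)
print("extrapolated full-dataset validation error:",
      learning_curve(len(X_tr), *popt))
```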

    Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning

    Automated Machine Learning (AutoML) supports practitioners and researchers with the tedious task of designing machine learning pipelines and has recently achieved substantial success. In this paper, we introduce new AutoML approaches motivated by our winning submission to the second ChaLearn AutoML challenge. We develop PoSH Auto-sklearn, which enables AutoML systems to work well on large datasets under rigid time limits by using a new, simple and meta-feature-free meta-learning technique and by employing a successful bandit strategy for budget allocation. However, PoSH Auto-sklearn introduces even more ways of running AutoML and might make it harder for users to set it up correctly. Therefore, we also go one step further and study the design space of AutoML itself, proposing a solution towards truly hands-free AutoML. Together, these changes give rise to the next generation of our AutoML system, Auto-sklearn 2.0. We verify the improvements due to these additions in an extensive experimental study on 39 AutoML benchmark datasets. We conclude the paper by comparing to other popular AutoML frameworks and Auto-sklearn 1.0, reducing the relative error by up to a factor of 4.5 and yielding a performance in 10 minutes that is substantially better than what Auto-sklearn 1.0 achieves within an hour.
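    As a minimal usage sketch of the hands-free workflow described above: the auto-sklearn package ships an experimental Auto-sklearn 2.0 entry point, but the exact import path and constructor arguments below are assumptions that should be checked against the documentation for the installed version.

```python
# Minimal sketch of using the Auto-sklearn 2.0 system on a small dataset.
# The import path is the experimental entry point in the auto-sklearn package;
# treat it as an assumption and verify it for your installed version.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from autosklearn.experimental.askl2 import AutoSklearn2Classifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Hands-free AutoML: only a time budget is specified; pipeline design,
# meta-learned warm starting, and budget allocation are handled automatically.
automl = AutoSklearn2Classifier(time_left_for_this_task=600)  # 10-minute budget
automl.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, automl.predict(X_test)))
```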